Anthropic and OpenAI are clashing over a proposed Illinois law that would shield AI labs from liability in cases involving mass casualties or financial disasters. The legislation, backed by OpenAI, has raised concerns among safety advocates.
Silicon Valley's biggest names are spending millions to stop Alex Bores, a former Palantir employee who helped pass one of the country's toughest AI laws, from running for Congress.
Stanford’s 2026 AI Index reveals a widening gap between expert optimism and public anxiety, especially among Gen Z. The report highlights declining trust in government AI regulation and rising employment concerns among younger workers.
A U.S. appeals court has refused to block the Pentagon's blacklisting of Anthropic, a major AI company, allowing the designation as a national security risk to remain in effect.
This article explains the EU's ban on fully AI-generated content in official communications and its implications for AI governance, transparency, and public trust.
California has enacted its own AI regulations for state contractors, taking a proactive stance against federal inaction and setting new standards for AI governance.
A federal judge has blocked the Trump administration's ban on Anthropic AI models, ruling that labeling the company a 'potential adversary' is unconstitutional. The decision emphasizes First Amendment protections in the context of AI regulation.
Learn about the EU AI Act, how it categorizes AI risks, and why European lawmakers are delaying some regulations while banning certain AI apps that create inappropriate content.
Senator Bernie Sanders proposes a moratorium on data center construction to allow time for AI safety assessments. Representative Alexandria Ocasio-Cortez plans to introduce similar legislation in the House.
Senate Democrats are working to codify Anthropic's ethical boundaries for AI use in autonomous weapons and surveillance into law, ensuring human oversight of critical decisions.
Senators Bernie Sanders and Alexandria Ocasio-Cortez propose a ban on new data center construction until comprehensive AI regulation is established.
A federal judge questioned the Pentagon's motivations for labeling Anthropic, the developer of Claude AI, as a supply-chain risk, suggesting the move may be an attempt to stifle the company's growth.